AM with Multiple Merlins
We introduce and study a new model of interactive proofs: AM(k), or
Arthur-Merlin with k non-communicating Merlins. Unlike with the better-known
MIP, here the assumption is that each Merlin receives an independent random
challenge from Arthur. One motivation for this model (which we explore in
detail) comes from the close analogies between it and the quantum complexity
class QMA(k), but the AM(k) model is also natural in its own right.
We illustrate the power of multiple Merlins by giving an AM(2) protocol for
3SAT, in which the Merlins' challenges and responses consist of only
n^{1/2+o(1)} bits each. Our protocol has the consequence that, assuming the
Exponential Time Hypothesis (ETH), any algorithm for approximating a dense CSP
with a polynomial-size alphabet must take n^{(log n)^{1-o(1)}} time. Algorithms
nearly matching this lower bound are known, but their running times had never
been previously explained. Brandao and Harrow have also recently used our 3SAT
protocol to show quasipolynomial hardness for approximating the values of
certain entangled games.
In the other direction, we give a simple quasipolynomial-time approximation
algorithm for free games, and use it to prove that, assuming the ETH, our 3SAT
protocol is essentially optimal. More generally, we show that multiple Merlins
never provide more than a polynomial advantage over one: that is, AM(k)=AM for
all k=poly(n). The key to this result is a subsampling theorem for free games,
which follows from powerful results by Alon et al. and Barak et al. on
subsampling dense CSPs, and which says that the value of any free game can be
closely approximated by the value of a logarithmic-sized random subgame.
Comment: 48 pages
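The subsampling statement lends itself to a small experiment. Below is a toy Python sketch (the game, payoff function, and sizes are invented here for illustration, not taken from the paper): it computes the exact value of a small free game by brute force over one Merlin's strategy, optimizing the other Merlin's answer question-by-question, and then the value of a random induced subgame.

```python
import itertools
import random

def game_value(X, Y, A, B, V):
    """Exact value of a free game: the maximum, over strategies f: X -> A for
    Merlin 1, of the average payoff when Merlin 2 answers each question y
    optimally (legal, since y's answer may depend only on y)."""
    best = max(
        sum(max(sum(V(x, y, fa, b) for x, fa in zip(X, f)) for b in B) for y in Y)
        for f in itertools.product(A, repeat=len(X))
    )
    return best / (len(X) * len(Y))

def subgame_value(X, Y, A, B, V, k, rng):
    """Value of the induced subgame on k random questions per side."""
    return game_value(rng.sample(X, k), rng.sample(Y, k), A, B, V)

# toy game: payoffs reward answer pairs whose parity matches the parity of x*y
X = list(range(8))
Y = list(range(8))
A = B = (0, 1)
V = lambda x, y, a, b: 1.0 if (a + b) % 2 == (x * y) % 2 else 0.0

full = game_value(X, Y, A, B, V)
sub = subgame_value(X, Y, A, B, V, 4, random.Random(0))
```

For games of this toy size one can compare `full` and `sub` directly; the theorem says that for dense (free) games a logarithmic-sized subgame already tracks the true value closely.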
0-1 Integer Linear Programming with a Linear Number of Constraints
We give an exact algorithm for the 0-1 Integer Linear Programming problem
with a linear number of constraints that improves over exhaustive search by an
exponential factor. Specifically, our algorithm runs in time 2^{(1-poly(1/c))n},
where n is the number of variables and cn is the
number of constraints. The key idea for the algorithm is a reduction to the
Vector Domination problem and a new algorithm for that subproblem.
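The reduction can be illustrated by a miniature meet-in-the-middle version (a naive sketch, not the paper's algorithm; the quadratic pairing loop below is exactly the Vector Domination subproblem that the paper solves faster):

```python
from itertools import product

def ilp_feasible(A, b):
    """Decide whether some 0-1 vector x satisfies A x <= b componentwise,
    by splitting the variables in half: each half-assignment yields a vector
    of partial constraint sums, and a solution exists iff some left vector s
    is dominated by (b - t) for some right vector t."""
    m, n = len(A), len(A[0])
    h = n // 2

    def partial_sums(cols):
        out = []
        for bits in product((0, 1), repeat=len(cols)):
            out.append(tuple(sum(A[i][j] * x for j, x in zip(cols, bits))
                             for i in range(m)))
        return out

    left = partial_sums(range(h))
    right = partial_sums(range(h, n))
    # naive Vector Domination check; the paper's contribution replaces this
    # quadratic pairing with a genuinely faster subroutine
    for s in left:
        for t in right:
            if all(s[i] + t[i] <= b[i] for i in range(m)):
                return True
    return False
```

Equality constraints can be encoded as two opposing inequalities, e.g. x1 + x2 = 1 becomes x1 + x2 <= 1 and -x1 - x2 <= -1.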
Comparing Computational Entropies Below Majority (Or: When Is the Dense Model Theorem False?)
Computational pseudorandomness studies the extent to which a random variable
looks like the uniform distribution according to a class of
tests. Computational entropy generalizes computational pseudorandomness by
studying the extent to which a random variable looks like a \emph{high entropy}
distribution. There are different formal definitions of computational entropy
with different advantages for different applications. Because of this, it is of
interest to understand when these definitions are equivalent.
We consider three notions of computational entropy which are known to be
equivalent when the test class is closed under taking majorities.
This equivalence constitutes (essentially) the so-called \emph{dense model
theorem} of Green and Tao (and later made explicit by Tao-Ziegler, Reingold et
al., and Gowers). The dense model theorem plays a key role in Green and Tao's
proof that the primes contain arbitrarily long arithmetic progressions and has
since been connected to a surprisingly wide range of topics in mathematics and
computer science, including cryptography, computational complexity,
combinatorics and machine learning. We show that, in different situations where
the test class is \emph{not} closed under majority, this equivalence fails. This
in turn provides examples where the dense model theorem is \emph{false}.
Comment: 19 pages; to appear in ITCS 202
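The role of majority can be seen in miniature. The toy example below (invented here, not from the paper) shows why closure under majority is a real resource: three weak one-bit tests each distinguish two distributions with advantage 0.5, while their majority distinguishes them perfectly.

```python
def adv(test, P, Q):
    """Distinguishing advantage |E_P[test] - E_Q[test]| of a Boolean test,
    with P, Q given as dicts mapping outcomes to probabilities."""
    return abs(sum(p for x, p in P.items() if test(x)) -
               sum(q for x, q in Q.items() if test(x)))

def majority(tests):
    """Majority vote of an odd number of tests: a single, stronger test."""
    return lambda x: 2 * sum(t(x) for t in tests) > len(tests)

# three weak "dictator" tests on 3-bit strings, each reading one coordinate
tests = [lambda x, i=i: x[i] == '1' for i in range(3)]
P = {x: 0.25 for x in ('110', '101', '011', '111')}  # mostly-ones strings
Q = {x: 0.25 for x in ('000', '100', '010', '001')}  # mostly-zeros strings

single = max(adv(t, P, Q) for t in tests)   # each dictator: advantage 0.5
combined = adv(majority(tests), P, Q)       # their majority: advantage 1.0
```

When the test class may take majorities, weak advantages can be amplified as above; the paper studies what breaks when that closure is unavailable.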
The Power of Natural Properties as Oracles
We study the power of randomized complexity classes that are given oracle access to a natural property of Razborov and Rudich (JCSS, 1997) or its special case, the Minimum Circuit Size Problem (MCSP). We show that in a number of complexity-theoretic results that use the SAT oracle, one can use the MCSP oracle instead. For example, we show that ZPEXP^{MCSP} ⊄ P/poly, which should be contrasted with the previously known circuit lower bound ZPEXP^{NP} ⊄ P/poly. We also show that, assuming the existence of Indistinguishability Obfuscators (IO), SAT and MCSP are equivalent in the sense that one has a ZPP algorithm if and only if the other one does. We interpret our results as providing some evidence that MCSP may be NP-hard under randomized polynomial-time reductions.
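For intuition about the oracle in question: MCSP takes a function's full truth table together with a size bound and asks whether a circuit within that bound exists. A toy brute-force sketch (the NAND basis, the name `mcsp_size`, and the search strategy are choices made here for illustration, not the paper's construction; only feasible for tiny input counts):

```python
from itertools import product

def nand(u, v):
    """Pointwise NAND of two truth tables (tuples of 0/1 values)."""
    return tuple(1 - (a & b) for a, b in zip(u, v))

def mcsp_size(target, n, max_gates=5):
    """Minimum number of 2-input NAND gates whose final gate computes
    `target`, a tuple of 2**n output bits, one per input assignment."""
    wires0 = [tuple((a >> i) & 1 for a in range(2 ** n)) for i in range(n)]
    if target in wires0:
        return 0
    for s in range(1, max_gates + 1):
        # each gate may read any two (possibly equal) earlier wires
        for picks in product(*[list(product(range(n + g), repeat=2))
                               for g in range(s)]):
            wires = list(wires0)
            for j, k in picks:
                wires.append(nand(wires[j], wires[k]))
            if wires[-1] == target:
                return s
    return None
```

Truth tables here list outputs for assignments 0..2^n-1, so over two inputs x0 = (0,1,0,1) and x1 = (0,0,1,1), AND is (0,0,0,1). The exponential cost of this search is exactly why oracle access to MCSP is a nontrivial resource.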
Synergy Between Circuit Obfuscation and Circuit Minimization
We study close connections between Indistinguishability Obfuscation (IO) and the Minimum Circuit Size Problem (MCSP), and argue that efficient algorithms/constructions for MCSP and IO create a synergy. Some of our main results are:
- If there exists a perfect (imperfect) IO that is computationally secure against nonuniform polynomial-size circuits, then for all k ∈ ℕ: NP ∩ ZPP^{MCSP} ⊄ SIZE[n^k] (MA ∩ ZPP^{MCSP} ⊄ SIZE[n^k]).
- In addition, if there exists a perfect IO that is computationally secure against nonuniform polynomial-size circuits, then NEXP ∩ ZPEXP^{MCSP} ⊄ P/poly.
- If MCSP ∈ BPP, then statistical security and computational security for IO are equivalent.
- If computationally-secure perfect IO exists, then MCSP ∈ BPP iff NP = ZPP.
- If computationally-secure perfect IO exists, then ZPEXP ≠ BPP.
To the best of our knowledge, this is the first derivation of strong circuit lower bounds from the existence of an IO. The results are obtained via a construction of an optimal universal distinguisher, computable in randomized polynomial time with access to the MCSP oracle, that distinguishes any two circuit-samplable distributions with advantage equal to the statistical distance between these two distributions, minus a negligible error term. This is our main technical contribution. As another immediate application, we get a simple proof of the result by Allender and Das (Inf. Comput., 2017) that SZK ⊆ BPP^{MCSP}.
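The distinguisher in the abstract is "universal": it sees only sampling circuits, not the distributions themselves. When the distributions are known explicitly, the information-theoretic optimum is easy to write down, and it is the benchmark the universal distinguisher approaches. A short sketch (function names are ours):

```python
def statistical_distance(P, Q):
    """Total variation distance between distributions given as probability dicts."""
    support = set(P) | set(Q)
    return 0.5 * sum(abs(P.get(x, 0.0) - Q.get(x, 0.0)) for x in support)

def ml_distinguisher(P, Q):
    """Maximum-likelihood test D(x) = [P(x) > Q(x)]; its advantage
    E_P[D] - E_Q[D] equals the statistical distance between P and Q."""
    D = {x for x in set(P) | set(Q) if P.get(x, 0.0) > Q.get(x, 0.0)}
    advantage = (sum(P.get(x, 0.0) for x in D) -
                 sum(Q.get(x, 0.0) for x in D))
    return D, advantage

P = {'a': 0.5, 'b': 0.5}
Q = {'a': 0.1, 'b': 0.9}
test_set, a = ml_distinguisher(P, Q)
```

The paper's point is that, given only circuits sampling P and Q, a randomized polynomial-time machine with an MCSP oracle can still achieve this optimal advantage up to a negligible loss.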